Microsoft Unveils In-House AI Chip, Poised for Competitive Edge with a Powerful Ecosystem
Printed Circuit Boards (PCBs) are the fundamental building blocks of modern electronics, comprising a myriad of components meticulously arranged to enable the functionality of electronic devices. Identifying these components on a PCB and understanding their roles is essential for troubleshooting, repairs, and even designing electronic circuits. In this comprehensive guide, we delve into the world of PCB components, unraveling their types, functionalities, and methods of identification.

Understanding PCB Components

1. Resistors:
  • Identification: Resistors are usually small, cylindrical components with colored bands indicating resistance values (a short decoding example follows this article). Use a multimeter to measure resistance if the bands are unclear.
  • Function: Resistors limit current flow in a circuit, adjusting voltage levels or protecting components.

2. Capacitors:
  • Identification: Capacitors come in various shapes (cylindrical, rectangular) and sizes, often labeled with capacitance values and voltage ratings.
  • Function: They store and release electrical energy, filtering signals or stabilizing voltage.

3. Diodes:
  • Identification: Diodes appear as small cylindrical or square-shaped components with a stripe indicating polarity.
  • Function: They allow current flow in one direction, blocking it in the opposite direction.

4. Transistors:
  • Identification: Transistors come in different shapes (often three-legged), with part numbers indicating their type.
  • Function: They amplify or switch electronic signals, serving as the basic building blocks of electronic devices.

5. Integrated Circuits (ICs):
  • Identification: ICs are rectangular components with multiple pins. The part number often includes information about the manufacturer and type.
  • Function: ICs integrate various functions (logic, memory, amplification) into a single package.

6. Inductors:
  • Identification: Inductors resemble wire coils and are labeled with inductance values.
  • Function: They store energy in a magnetic field and resist changes in current flow.

7. Connectors and Headers:
  • Identification: Connectors are ports or slots for external connections. Headers are sets of pins for internal connections.
  • Function: They facilitate the connection of external components or other PCBs.

Techniques for Identifying PCB Components

Visual Inspection:
  • Markings and Labels: Many components have printed markings indicating their values, part numbers, or manufacturers.
  • Physical Characteristics: Size, shape, and color often provide clues about a component's type and function.

Multimeter and Testing:
  • Resistance Measurement: Use a multimeter in resistance mode to identify resistors and check their values.
  • Capacitance Measurement: Multimeters with capacitance measuring capabilities can identify capacitors.

Datasheets and Component Manuals:
  • Online Resources: Manufacturers provide datasheets detailing component specifications and identification information.
  • Component Manuals: Some components have manuals with comprehensive details for identification.

Challenges and Conclusion

Identifying PCB components can present challenges due to the sheer diversity of shapes, sizes, and labeling conventions across manufacturers. Furthermore, miniaturization and surface-mount technology have made identification more intricate.

In conclusion, mastering the identification of PCB components is a foundational skill for electronics enthusiasts, engineers, and technicians. Utilizing a combination of visual inspection, testing tools, datasheets, and experience will empower individuals to decipher the complexities of PCBs, enabling effective troubleshooting, circuit design, and maintenance within the dynamic landscape of electronics.
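To make the resistor-identification technique above concrete, here is a minimal Python sketch that decodes a standard four-band resistor color code. The color-to-value tables follow the common EIA convention; the function name and structure are illustrative rather than taken from any library, and gold/silver fractional multipliers are omitted for brevity.

```python
# Minimal sketch: decode a standard 4-band resistor color code.
# Tables follow the common EIA convention; names are illustrative.

DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
}
TOLERANCE_PCT = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0}

def decode_four_band(band1: str, band2: str, multiplier: str, tolerance: str):
    """Return (resistance in ohms, tolerance in percent) for a 4-band resistor.

    Only digit-colored multiplier bands are handled; gold/silver
    fractional multipliers are left out to keep the sketch short.
    """
    value = (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    return value, TOLERANCE_PCT[tolerance]

# Example: yellow-violet-red-gold reads as 47 x 10^2 = 4700 ohms, +/-5%.
ohms, tol = decode_four_band("yellow", "violet", "red", "gold")
print(f"{ohms} ohms, +/-{tol}%")
```

If the printed bands are faded or ambiguous, the multimeter measurement described above remains the authoritative check.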
Release time: 2023-11-23 13:19
Microsoft's First In-House AI Chip, "Maia," Produced on TSMC's 5nm Process
On the 15th, Microsoft introduced its first in-house AI chip, "Maia." The move marks the entry of the world's second-largest cloud service provider (CSP) into the domain of self-developed AI chips. Concurrently, Microsoft introduced the cloud computing processor "Cobalt," set to be deployed alongside Maia in selected Microsoft data centers early next year. Both chips are produced on TSMC's advanced 5nm process, as reported by UDN News.

Amidst the global AI fervor, the trend of CSPs developing their own AI chips has gained momentum. Key players like Amazon, Google, and Meta have already ventured into this territory. Microsoft, the second-largest CSP globally, joined the league on the 15th, unveiling its inaugural self-developed AI chip, Maia, at its annual Ignite developer conference.

These AI chips developed by CSPs are not intended for external sale; rather, they are reserved exclusively for in-house use. However, given the commanding presence of the top four CSPs in the global market, a significant business opportunity unfolds. Market analysts anticipate that, with the exception of Google, which has aligned with Samsung for chip production, the other major CSPs will likely turn to TSMC to produce their self-developed AI chips.

TSMC maintains its consistent policy of not commenting on specific customer products or order details.

TSMC's recent earnings call disclosed that the 5nm process accounted for 37% of shipments in Q3 this year, the largest single contribution. Since bringing its first 5nm fab into mass production in 2020, TSMC has introduced technologies such as N4, N4P, N4X, and N5A, continually reinforcing its 5nm family.

Maia is tailored for processing large language models. According to Microsoft, it will initially power the company's own services, such as the $30-per-month AI assistant "Copilot," and offers Azure cloud customers a customizable alternative to Nvidia chips.

Borkar, Corporate VP of Azure Hardware Systems & Infrastructure at Microsoft, revealed that Microsoft has been testing the Maia chip with its Bing search engine and Office AI products. Notably, Microsoft has been relying on Nvidia chips to train GPT models in collaboration with OpenAI, and Maia is currently undergoing testing.

Gulia, Executive VP of the Microsoft Cloud and AI Group, emphasized that starting next year, Microsoft customers using Bing, Microsoft 365, and Azure OpenAI services will see the performance capabilities of Maia.

While actively advancing its in-house AI chip development, Microsoft underscores its commitment to offering Azure customers cloud services built on the latest flagship chips from Nvidia and AMD, sustaining existing collaborations.

As for the cloud computing processor Cobalt, a 128-core chip based on the Arm architecture, Microsoft says its capabilities are comparable to offerings from Intel and AMD. Drawing on chip designs from devices such as smartphones for better energy efficiency, Cobalt aims to challenge major cloud competitors, including Amazon.
Release time: 2023-11-17 16:00
What Are AI Chips? What Types of Mainstream AI Chips Are There?
AI chips are not equivalent to GPUs (graphics processing units). Although GPUs can be used to perform some AI computing tasks, AI chips are chips specifically designed and optimized for artificial intelligence computing.

The GPU was originally designed for graphics processing: its main function is to process images, render graphics, and accelerate graphics workloads. It features massively parallel processing units and a high-bandwidth memory system to meet image-processing and computing needs. Since artificial intelligence computing also requires large-scale parallel computation, the GPU has come to play a significant role in the field of AI.

However, compared with traditional general-purpose processors, AI chips have specific designs and optimizations to better meet the needs of artificial intelligence computing. Here are some key differences between AI chips and GPUs:

1. Architecture design: AI chips differ from GPUs in architecture. AI chips typically have dedicated hardware accelerators for common AI computing tasks, such as matrix operations and neural network operations. These hardware accelerators can provide higher computing performance and energy efficiency for artificial intelligence workloads.

2. Computing optimization: AI chip designs focus on optimizing compute-intensive tasks such as the training and inference of deep learning models. They usually use specific instruction sets and hardware structures to accelerate common calculations such as matrix multiplication, convolution, and vector operations (see the timing sketch after this article). By comparison, GPU design focuses more on graphics processing and general-purpose computing, and may be less efficient for some AI computing tasks.

3. Energy efficiency and power consumption: AI chips usually offer high energy efficiency and low power consumption to meet the needs of large-scale AI computing and edge devices. They employ power-saving techniques and optimization strategies to reduce power consumption while maintaining performance. In contrast, GPUs may require more power when handling complex tasks.

4. Customization and flexibility: AI chips are usually designed for specific AI application scenarios and can be customized for particular computing needs. This custom design can deliver better performance, while the GPU is a general-purpose processor suited to a wide range of computing tasks.

What types of mainstream AI chips are there?

1. Graphics Processing Unit (GPU): The GPU was originally designed for graphics processing, but thanks to its highly parallel computing capabilities it is increasingly used to accelerate AI computing tasks. NVIDIA's GPUs, such as the Tesla and GeForce series, are widely used in AI computing.

2. Application-Specific Integrated Circuit (ASIC): An ASIC is a chip custom-built and optimized for a specific application. In the AI field, ASIC chips such as Google's Tensor Processing Unit (TPU) and Bitmain's ASICs offer efficient AI computing capabilities.

3. Field-Programmable Gate Array (FPGA): An FPGA is a reconfigurable hardware platform that users can program for specific needs. In AI computing, an FPGA can be optimized for different neural network architectures, offering flexibility and scalability.

4. Neural Processing Unit (NPU): An NPU is a chip designed specifically for neural network computing tasks. NPUs usually have a highly parallel structure and specialized instruction sets to accelerate the training and inference of neural network models. Huawei's Kirin NPU and Asus' Thinker series chips are common NPUs.

5. Edge AI chips: Edge AI chips are designed for edge computing devices such as smartphones, Internet of Things devices, and drones. These chips typically feature low power consumption, high energy efficiency, and small size to suit edge devices. For example, Qualcomm's Snapdragon series chips integrate AI acceleration.

Leading companies and products in AI chips

1. Huawei
Kirin NPU: Huawei's Kirin chip series integrates its own NPU to provide efficient AI computing capabilities. These chips are widely used in Huawei's smartphones and other devices.

2. NVIDIA
GPU products: NVIDIA's GPU lines include GeForce, Quadro, and Tesla; the Tesla series is widely used in deep learning and AI computing.
Tensor Core: NVIDIA's Tensor Core is a hardware unit designed to accelerate deep learning calculations, integrated into its GPUs.

3. Google
Tensor Processing Unit (TPU): The TPU developed by Google is an ASIC built to accelerate artificial intelligence calculations. TPUs are widely used in Google's data centers to accelerate machine learning training and inference workloads.

4. Intel
Intel Nervana Neural Network Processor (NNP): The Intel NNP is an ASIC designed for deep learning. It has a highly parallel architecture and optimized neural network computing power.

5. AMD
Radeon Instinct: AMD's Radeon Instinct series of GPUs is designed for high-performance computing and deep learning tasks. These GPUs have powerful parallel computing capabilities and support deep learning frameworks and tools.

6. Apple
Apple Neural Engine: Apple integrates the Neural Engine in its A-series chips as a hardware accelerator dedicated to speeding up machine learning and AI tasks. It supports functions such as face recognition and voice recognition.
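The matrix-multiplication point above is easy to see empirically. Below is a minimal Python sketch, assuming PyTorch is installed and a CUDA-capable GPU is available for the second measurement, that times one large dense matrix multiplication (the core operation behind neural-network layers) on the CPU and on an accelerator. It illustrates why parallel hardware matters; it is not a rigorous benchmark.

```python
# Minimal sketch: compare matmul time on CPU vs. an accelerator.
# Assumes PyTorch; results vary widely with hardware and sizes.
import time
import torch

def time_matmul(device: str, n: int = 2048) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # ensure setup work has finished
    start = time.perf_counter()
    c = a @ b                      # dense matmul: the op AI chips accelerate
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the asynchronous GPU kernel
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s")
```

Dedicated accelerators such as TPUs and NPUs push the same idea further by hard-wiring matrix and convolution pipelines rather than running them on general-purpose cores.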
Release time: 2023-09-25 15:39
Intel, Facebook working on cheaper AI chip
Intel and Facebook are working together on a new, cheaper Artificial Intelligence (AI) chip that will help companies with high workload demands.

At CES 2019 on Monday, Intel announced the "Nervana Neural Network Processor for Inference" (NNP-I). "This new class of chip is dedicated to accelerating inference for companies with high workload demands and is expected to go into production this year," Intel said in a statement.

Facebook is one of Intel's development partners on the NNP-I. Navin Shenoy, Intel Executive Vice President in the Data Centre Group, announced that the NNP-I will go into production this year. The new "inference" AI chip will help Facebook and others deploy machine learning more efficiently and cheaply. Intel began its AI chip development after acquiring Nervana Systems in 2016.

Intel also announced that, together with Alibaba, it is developing AI-powered athlete tracking technology aimed at deployment at the Olympic Games 2020 and beyond. The technology uses existing and upcoming Intel hardware and Alibaba cloud computing technology to power a cutting-edge deep learning application that extracts 3D forms of athletes in training or competition.

"This technology has incredible potential as an athlete training tool and is expected to be a game-changer for the way fans experience the Games, creating an entirely new way for broadcasters to analyse, dissect and re-examine highlights during instant replays," explained Shenoy.

Intel and Alibaba, together with partners, aim to deliver the first AI-powered 3D athlete tracking during the Olympic Games Tokyo 2020. "We are proud to partner with Intel on the first-ever AI-powered 3D athlete tracking technology, where Alibaba contributes its best-in-class cloud computing capability and algorithmic design," said Chris Tung, CMO, Alibaba Group.
Release time: 2019-01-09 00:00
China responds to US tariffs with its own charges and a new AI chip
Broadcom to Help Design Wave's 7-nm AI Chip
Wave Computing has set its sights on becoming the first AI startup to develop and deploy a 7-nm AI processor in its AI systems. EE Times has learned that Wave has snagged Broadcom Inc. as the ASIC designer for the new 7-nm project. The two companies will collaborate on development of Wave's next-generation Dataflow Processing Unit (DPU) using Taiwan Semiconductor Manufacturing Co.'s 7-nm process node.

The new 7-nm DPU, scheduled for delivery by Broadcom at an undisclosed date, will be "designed into our own AI system," confirmed Wave's CEO, Derek Meyer. He added that the same chip may become available to others "if there is a market demand."

"Wave is hoping to get a jump on the startup competition with a 7-nm part," observed Kevin Krewell, principal analyst at Tirias Research. "Most startups don't have the expertise to build a 7-nm part just yet." He explained that Broadcom's involvement made this possible. Broadcom, he noted, "does have more senior ASIC circuit design experience through the acquisition of LSI Logic."

Wave's current-generation DPU is based on a 16-nm process design and was built by Wave's employees with the help of contractors.

"Among our peers who are designing a new breed of AI accelerators, we will be the first to have access to 7-nm physical IP — such as 56-Gbps and 112-Gbps SerDes — thanks to Broadcom," noted Meyer. Broadcom is "instrumental to bringing this 7-nm project to fruition," he explained, thanks to "their industry-leading design platform, productization skills, and proven 7-nm IPs."

As for the 7-nm DPU, Meyer said, "Between Broadcom and Wave, we have sketched out skills and resources that will be necessary to both front-end and back-end [of the ASIC] designs. We devised our plans for collaboration accordingly."

The joint 7-nm project has been up and running for several months, and Broadcom will manage physical delivery of the chip. Despite the complexity of 7-nm designs, Meyer said, "I am confident that Broadcom will deliver the first-time-right chip." Wave, however, declined to comment on when its 7-nm DPU will become available.

What's in the 7-nm DPU?

Wave did not reveal the architecture of its 7-nm DPU, either. Meyer, however, explained that the new chip will be "based on the data flow architecture." It will be the first DPU featuring a "64-bit MIPS multithreaded CPU." Wave acquired MIPS in June. Meyer also indicated that Wave's 7-nm chip will come with "new features in memory," but he refrained from disclosing exactly what those features are.

MIPS's multithreading technology will play a key role in the new-generation DPU, according to Meyer. In Wave's dataflow processing, "when we load, unload, and reload data for machine-learning agents, hardware multithreading architecture is effective." MIPS's cache coherence is another positive for Wave's new DPU. "Because our DPU is 64-bit, it only makes sense that both MIPS and DPU talk to the same memory in 64-bit address space," he said.

Asked about Wave's new features in memory, Krewell said, "Wave's present chip uses Micron's Hybrid Memory Cube. And I believe Wave will move to high-bandwidth memory (HBM) in future chips." He added, "There's a much better roadmap for HBM. The changing memory architecture will have an impact on the overall system architecture." Karl Freund, senior analyst at Moor Insights & Strategy, concurred. He said, "For memory, I suspect they will abandon the Hybrid Memory Cube and adopt high-bandwidth memory, which is more cost-effective."

During the interview, Meyer boasted that the new 7-nm DPU should be able to offer 10 times the performance of the company's current chip. "Remember, we separated the clocks from our chips" in the DPU architecture, he said. Noting that going back and forth to a host creates a bottleneck, he explained that in the DPU, an embedded microcontroller loads instructions, cutting down on the power and latency wasted by traditional accelerators. "We can take advantage of that capacity available for transistors on the 7-nm chip to increase the performance."

Krewell remained a little skeptical. "As to whether Wave can make a 10x leap, that's a long reach," he said. "It depends on how machine-learning performance is measured … and whether Derek [Meyer] was talking training or inference." He added, "There are a lot of changes going on in inference, with lower-precision (8-bit and below) algorithms being deployed. Training performance is heavily memory-architecture-dependent." He acknowledged, "But I don't know the details of what Wave has planned."
Release time: 2018-08-02 00:00
AI Chip Startup SambaNova Snags $56 Million in Funding
Artificial intelligence chip startup SambaNova has taken on $56 million in Series A funding from a group of investors led by Google's venture capital arm and Walden International.

SambaNova (Palo Alto, Calif.), founded last year by a pair of Stanford University professors and the former head of processor development at Oracle and Sun Microsystems, is based largely on DARPA-funded research by the two professors on efficient AI processing.

Kunle Olukotun, a SambaNova co-founder and the company's chief technology officer, said in a press statement that the company's innovations in machine-learning algorithms and software-defined hardware would dramatically improve the performance and capability of intelligent applications.

"The flexibility of the SambaNova technology will enable us to build a unified platform providing tremendous benefits for business intelligence, machine learning and data analytics," said Olukotun, a pioneer in multi-core processing and a recent winner of the IEEE Computer Society's Harry H. Goode Memorial Award.

Chris Ré, SambaNova's other Stanford professor co-founder, is known for his work in database theory and has been recognized with several awards, including a MacArthur Genius Award. Joint work by Kunle and Chris on converged analytics has also recently won several awards, according to SambaNova.

The third co-founder is Rodrigo Liang, former vice president of processor development at Oracle, who serves as SambaNova's CEO. "We have exposed our technology to some of the world's largest companies across different industries and we are excited about the broad applicability of our technology from enterprise to the edge," Liang said.

In addition to Walden International and GV (formerly known as Google Ventures), venture capital firms Redline Capital and Atlantic Bridge Ventures also invested in SambaNova's Series A funding round.

SambaNova becomes the latest in a group of startups to receive funding in the red-hot AI chip space. The group also includes Silicon Valley startups Cerebras Systems and Wave Computing, UK-based Graphcore, and others.

As part of the funding arrangement, semiconductor industry veteran Lip-Bu Tan, the chairman of Walden International and CEO of EDA vendor Cadence Design Systems, has taken the role of chairman of SambaNova.

"SambaNova has gathered a dream team of Stanford professors, PhDs and industry veterans who profoundly understand how to cooperatively optimize AI applications, machine learning algorithms, systems software, hardware architecture and silicon implementation to create a new platform for intelligent applications with exceptional capabilities," Tan said in a statement.
Release time: 2018-03-22 00:00
Intel Touts Auto AI Chip's Efficiency
Brian Krzanich, Intel's CEO, came to Los Angeles for the "AutoMobility LA" auto show flush with forecasts of how autonomous driving will change every aspect of future vehicles, from cabin design to entertainment and life-saving safety systems.

As a leading autonomous vehicle chip company, Intel also seized the moment to set the record straight on the efficiency of the EyeQ5 chip, developed by Mobileye (now an Intel company), compared to Nvidia's Drive PX Xavier SoC designed for autonomous driving.

During his speech, Krzanich, referring to the recently completed Mobileye acquisition, stressed that Intel "can deliver more than twice the deep-learning performance efficiency than the competition [meaning Nvidia]." As Nvidia strives to brand itself as a leader in AI-based autonomous driving technology through relentless promotion of its Drive PX platform, Intel appears to have decided to charge full-tilt into the brewing battle of specsmanship.

Incorrectly quoted

Speaking with EE Times, Jack Weast, Intel's principal engineer and chief architect of autonomous driving solutions, described Intel as a company that tends to lean conservative when touting its chips' performance. As the war of words escalates, though, Weast said, "We are tired of seeing us incorrectly quoted."

Weast complained that rivals and the media often incorrectly compare Nvidia's Drive PX to Intel's desktop PC chips. In an apples-to-apples comparison, Mobileye's fifth-generation vision-sensor-fusion chip must instead be compared with Nvidia's Xavier SoC, he said. The EyeQ5 delivers 24 trillion operations per second (TOPS) at 10 watts, Weast said, while the Drive PX Xavier offers 30 TOPS of performance while consuming 30 watts of power. "We are 2.4 times more efficient," said Weast (the arithmetic is spelled out after this article).

Of course, Nvidia is now promoting its latest Pegasus SoC, scheduled for delivery in 2018 and designed to perform 320 TOPS — more than 10x the performance of its predecessor — at 500 watts of power. "Pegasus is new, but its efficiency isn't getting better," Weast said. Nvidia's Pegasus couples two of Nvidia's Xavier SoCs with two next-generation discrete GPUs with hardware acceleration.

The mystery that remains, even to some experts, is how Intel plans to combine Mobileye's "eye" with an Intel microprocessor "brain" in a highly automated vehicle. Intel, in fact, might partly share the blame for the market's confusion: the CPU giant has remained silent about what sort of SoCs it has been developing on its own, separate from Mobileye's EyeQ5.

Intel to launch multi-chip platform

According to Weast, Intel is planning to unveil soon — leading up to the Consumer Electronics Show in January — "a multi-chip platform for autonomous driving." The solution will combine the EyeQ5 SoC, Intel's low-power Atom SoCs, and other hardware including I/O and Ethernet connectivity, he explained.

When Intel unveiled its GO development platform for autonomous driving earlier this year, it described its Atom processor C3000 as a chip that "delivers high performance per watt, packing substantial compute into low-power designs."

Asked how the Atom SoC shares processing tasks with the EyeQ5, Weast said, "We looked at the entire set of workload necessary for autonomous vehicles." Then, he said, "We allocated and partitioned the compute loads" among multiple chips.

Asked if an FPGA is part of that multi-chip solution, Weast said no. "There are some customers looking at FPGAs for certain applications such as custom I/O or security, but it's not part of our new multi-chip platform."

Division of labor

When Mobileye originally announced the EyeQ5, before being acquired by Intel, the Israeli company touted the new SoC as the "brain" of autonomous vehicles, tasked with serving as "the vision central computer performing sensor fusion" for fully autonomous (Level 5) vehicles. If so, where does Intel's Atom SoC come in?

Autonomous driving requires different levels of sensor fusion, Weast explained. In deep-learning acceleration, some sensor fusion demands a chip that can process highly parallel, multi-threaded chunks of code. "For that, EyeQ5 is ideal." Meanwhile, there is also a need for higher-level environmental sensor fusion, which looks at trajectories and validations, Weast explained. "A CPU is a better fit to perform such tasks."

In Intel's view, Weast said, in enabling highly automated driving, "We didn't have to cram everything into one SoC, such as EyeQ5." Intel had "the luxury of an opportunity to figure out where the spare cycle is, and how the compute workload should be partitioned," he explained.

As soon as the Mobileye deal was completed last August, everyone on the team "immediately dove into the project," Weast said. Intel will shortly detail its multi-chip platform designed for autonomous driving, Weast promised. "At that point, we should be able to offer a platform-level comparison" between Nvidia's Drive PX and Intel's solution, he said.
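As a quick sanity check, the 2.4x efficiency claim above follows directly from performance per watt using the numbers Weast cites; a minimal Python calculation:

```python
# Reproduce the efficiency comparison from the figures quoted above:
# EyeQ5: 24 TOPS at 10 W; Nvidia Drive PX Xavier: 30 TOPS at 30 W.
eyeq5 = 24 / 10    # 2.4 TOPS per watt
xavier = 30 / 30   # 1.0 TOPS per watt
print(f"{eyeq5 / xavier:.1f}x")  # 2.4x -> Weast's "2.4 times more efficient"
```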
Release time: 2017-12-01 00:00
AI Chip Startup Graphcore Lands $50 Million in Funding
Graphcore, a developer of processors for machine learning and artificial intelligence, secured $50 million in additional funding, bringing the total raised by the UK-based startup to about $110 million over the past 18 months.

Graphcore's Series C funding, provided by venture firm Sequoia Capital, will be used to scale up production of the startup's first chip, which it calls an Intelligence Processing Unit (IPU). Graphcore plans to make the IPU available to early-access customers at the beginning of next year.

In addition to scaling up production, the new funding will be used to help build a community of developers around Graphcore's Poplar software platform, driving the company's extended product roadmap and investing in its Palo Alto, Calif.-based U.S. team to help support customers, Graphcore (Bristol, U.K.) said.

"Efficient AI processing power is rapidly becoming the most sought-after resource in the technological world," said Nigel Toon, Graphcore's CEO, in a press statement. "We believe our IPU technology will become the worldwide standard for machine intelligence compute."

Graphcore, which is featured in the most recent edition of EE Times Silicon 60, is perhaps the furthest along of a crop of startups formed to create new processor architectures for deep neural networks (DNNs). Other well-funded startups include Wave Computing, Cerebras, and Groq.

These startups and others are in the early days of battle with more established players: Google, which has offered its Tensor Processing Unit (TPU) custom ASIC for machine learning since last year; Nvidia, with its GPUs; and Intel, which acquired Nervana and plans to sample its Neural Network Processor ASSP next year.

Toon maintains that the performance of Graphcore's processor "is going to be transformative" compared to other accelerators. The company last month shared preliminary benchmarks that it says demonstrate that the IPU can improve the performance of machine-intelligence training and inference workloads by 10-100 times compared with current hardware.

Previous investors in Graphcore include Samsung Catalyst Fund, the venture capital arm of Samsung, and Robert Bosch Venture Capital.

Matt Miller, a partner at Sequoia, will join Graphcore's board of directors as a result of the funding, the company said. Bill Coughran, another partner at Sequoia, will join Graphcore's technical advisory board.
Release time: 2017-11-15 00:00
AMD Copilots Tesla AI Chip, Says Report
Tesla is testing samples of a machine-learning chip that it developed in collaboration with Advanced Micro Devices, according to a report from CNBC. AMD and Tesla both declined to comment on the story.

The chip was developed by Tesla's Autopilot group, a team of about 50 engineers under Jim Keller, a veteran microprocessor designer who led work on AMD's Zen x86 processor. The chip is expected to replace an Nvidia GPU that Tesla currently uses, which itself replaced a Mobileye chip, said the CNBC report.

Keller joined Tesla in January 2016, the Electrek online automotive news site reported at the time. Within four months, Tesla hired several of Keller's former colleagues, including processor veteran Peter Bannon and several senior AMD engineers.

A quick check of LinkedIn shows that at least a dozen senior AMD employees joined Tesla in the first months of 2016. Among the most senior was engineer-turned-strategist Keith Witek, Tesla's director of business development, who left AMD after 14 years, ending as its corporate business development strategist.

The rest of the former AMD group are engineers now generally working on silicon technology for Tesla's Autopilot group. One notable exception is Junli Gu, who mainly worked on deep-learning software for four years at AMD before leaving to build "and lead the machine-learning team at Tesla Autopilot," according to her online bio.

AMD's GPU team, acquired in 2006 with ATI Technologies, has a long history of developing ASICs, mainly for video game consoles such as the Xbox One X. A follow-up report from CNBC quoted Wall Street analysts saying that the Tesla chip marks a potentially disruptive entry for AMD into the market for silicon for self-driving cars.

Given Tesla's relatively small volume of car sales, the deal, if true, is not expected to be material to AMD's revenues in the near future. However, it could provide a powerful calling card in a market for machine-learning chips that is already becoming crowded with offerings from established and startup companies.
Release time: 2017-09-22 00:00
